
Protecting your practice: cybersecurity defenses in the age of generative AI
Generative AI offers many benefits for physicians, but also brings new risks of data breaches
The hack has been so disruptive to physicians that the U.S. Department of Health and Human Services has
Generative AI’s risks to physicians
Physicians are vulnerable for a few other reasons. They are increasingly using telemedicine, raising concerns about the security of video consultations and the transmission of patient data over less secure networks. And here’s a very big issue: generative AI itself, which offers physicians enormous benefits.
But generative AI is also fraught with risk. Here’s why:
Data sensitivity: generative AI models are often trained on massive amounts of sensitive patient data. Any vulnerability in AI systems could expose this data to breaches.
Third-party risks: many physicians use cloud-based generative AI tools. This introduces reliance on third-party security measures, so vulnerabilities in those vendors become risk points.
Model manipulation: bad actors could potentially manipulate the training data or the AI models themselves. This could lead to incorrect clinical notes, biased research results, or the generation of harmful content.
Integration vulnerabilities: integrating generative AI tools into existing healthcare systems creates additional points of entry for cyberattacks. If these integrations aren’t secure, patient data could be jeopardized.
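The data-sensitivity and third-party risks above often come down to one question: what leaves the practice’s systems when a cloud AI tool is called? A minimal sketch of one common safeguard is to redact identifiers before text is sent out. The patterns below are illustrative assumptions, not a complete PHI definition; a real deployment would use a vetted de-identification library.

```python
import re

# Hypothetical patterns for illustration only -- real PHI detection
# needs a vetted library, not hand-rolled regexes.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def redact_phi(text: str) -> str:
    """Replace common identifier patterns with labeled placeholders
    before the text leaves the practice's systems."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

note = "Patient MRN: 12345678, SSN 123-45-6789, call 555-867-5309."
print(redact_phi(note))
```

The point of the sketch is the placement, not the regexes: redaction happens locally, before any third-party AI service sees the note, so a breach at the vendor exposes placeholders rather than patient identifiers.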
Consider a hypothetical example: Sarah Wilson, M.D., a busy cardiologist, relies on generative AI for tasks such as clinical notes and uses an AI chatbot for patient scheduling and FAQs. That reliance proves costly when she unknowingly downloads malware disguised as a realistic email from her IT department. The malware infects her AI systems. During a patient appointment, her clinical note generator suggests unusual, incorrect information. Realizing something is amiss, she discovers the malware has not only manipulated the AI’s training data but also exposed patient records on the dark web.
Why would someone do that? Many times, the motivation is a ransomware attack that will require Dr. Wilson to pay an enormous amount of money to prevent additional disruptions. Or a malicious actor could have an axe to grind against physicians. Whatever the reason, Dr. Wilson’s practice is now in a world of hurt.
How physicians can protect themselves
How might Dr. Wilson and physicians like her safeguard themselves and their practices? Here are some recommended steps:
- Anticipate generative AI risks. Effective cybersecurity defense will always come down to anticipating how bad actors work so as to stay a step ahead of them. This means thinking like they think and fighting fire with fire. Tools such as MITRE Caldera and Microsoft Counterfit help any business (including health care organizations) test the security of generative AI systems.
- Implement a zero-trust architecture (ZTA). With ZTA, access to data is granted on a need-to-know basis and is constantly verified. A business employing ZTA protects its systems with a far greater level of rigor. Companies such as NVIDIA offer tools to help businesses implement ZTA.
- Embrace data loss prevention (DLP). DLP prevents the unauthorized use of sensitive information. DLP classifies sensitive data, monitors channels and devices for behavior that might indicate data is being shared or accessed inappropriately, and prevents data loss.
- Test your security with Purple Teaming. Purple Teaming is a collaborative approach that strengthens an organization’s security posture by having a single team simulate both cyberattacks and defenses. This allows for more realistic and comprehensive breach simulations.
- Do regular security audits: conduct security audits on AI systems and their integration points with existing healthcare IT infrastructure.
- Train staff: educate physicians and staff members on the cybersecurity risks associated with generative AI and how to identify potential threats.
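The zero-trust idea in the list above can be made concrete with a small sketch: deny by default, re-verify identity on every request, and allow an action only when the role has an explicit grant. The roles, resources, and policy table here are hypothetical; a real ZTA deployment sits behind an identity provider and a policy engine, not an in-process dictionary.

```python
from dataclasses import dataclass

# Hypothetical policy table: (role, resource) -> allowed actions.
# Anything not listed is denied by default.
POLICY = {
    ("physician", "clinical_notes"): {"read", "write"},
    ("physician", "schedule"): {"read"},
    ("front_desk", "schedule"): {"read", "write"},
}

@dataclass
class Request:
    role: str
    resource: str
    action: str
    mfa_verified: bool  # re-verified on every request, not once per session

def is_allowed(req: Request) -> bool:
    """Deny by default: a request succeeds only if identity is freshly
    verified AND the role has an explicit grant for this resource/action."""
    if not req.mfa_verified:
        return False
    return req.action in POLICY.get((req.role, req.resource), set())

print(is_allowed(Request("front_desk", "clinical_notes", "read", True)))  # False
print(is_allowed(Request("physician", "clinical_notes", "write", True)))  # True
```

Note what the front-desk example illustrates: even a fully authenticated insider cannot read clinical notes, because no grant exists. That need-to-know default is what limits the blast radius when credentials, or an AI integration, are compromised.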
Cyberattacks are not going away, any more than crime itself is. But physicians and health care organizations can take steps to protect themselves as they adopt generative AI.
Sanjay Bhakta, MBA, is vice president and head of solutions at